A Belief-Based Multi-Agent Markov Decision Process for Staff Management

Authors

  • Philip H.P. Nguyen
  • Minh-Quang Nguyen
  • Ken Kaneiwa
Abstract

This paper presents a system designed for task allocation, staff management and decision support in a large enterprise, in which permanent staff and contractors work alongside one another under the overall supervision of a manager to handle tasks initiated by end-users. The process of allocating a new task to a worker is modeled under different situations, taking into account user requirements as well as the different goals of management, permanent staff and contractors. Their actions and strategies are formalized as autonomous decision-support subsystems inside a multi-agent system, based on the Contract Net Protocol, belief theory, multiobjective optimization theory and Markov Decision Processes.
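
The abstract does not spell out the formal model, so the following is only a minimal sketch of how a Contract-Net-style call for bids might be combined with a belief-weighted score when the manager awards a task. The Bid record, the score weights and all data are hypothetical illustrations, not the authors' method.

    # Illustrative sketch only: a Contract-Net-style award step in which the
    # manager scores each bid by its quoted cost and by the manager's belief in
    # the bidder's reliability. All names, weights and data are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Bid:
        worker: str         # bidding worker (permanent staff or contractor)
        cost: float         # quoted cost/effort for the announced task
        reliability: float  # manager's belief (0..1) that the worker delivers

    def award_task(bids, cost_weight=0.5, belief_weight=0.5):
        """Pick the bid with the best belief-weighted score (lower cost, higher belief)."""
        def score(b: Bid) -> float:
            # Map cost into (0, 1] so both terms are on a comparable scale.
            return belief_weight * b.reliability + cost_weight / (1.0 + b.cost)
        return max(bids, key=score)

    if __name__ == "__main__":
        bids = [Bid("permanent_1", cost=8.0, reliability=0.9),
                Bid("contractor_1", cost=5.0, reliability=0.6)]
        print(award_task(bids).worker)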

Similar resources

Utilizing Generalized Learning Automata for Finding Optimal Policies in MMDPs

Multi-agent Markov decision processes (MMDPs), the generalization of Markov decision processes to the multi-agent case, have long been used for modeling multi-agent systems and serve as a suitable framework for multi-agent reinforcement learning. In this paper, an algorithm based on generalized learning automata for finding optimal policies in MMDPs is proposed. In the proposed algorithm, MMDP ...
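
The abstract is truncated, but as a reminder of the kind of update such automata perform, here is a generic linear reward-inaction step for a single learning automaton. This is a standard textbook rule shown only for orientation, not the generalized scheme proposed in the cited paper.

    # Generic linear reward-inaction update for one learning automaton.
    # Not the generalized scheme of the cited paper; shown only to illustrate
    # how action probabilities are nudged toward rewarded actions.
    def reward_inaction_update(probs, chosen, reward, step=0.1):
        """probs: action probabilities; chosen: index of the played action;
        reward: 1 on a favourable response, 0 otherwise."""
        if reward:  # move probability mass toward the rewarded action
            probs = [p + step * (1.0 - p) if i == chosen else p * (1.0 - step)
                     for i, p in enumerate(probs)]
        return probs  # on an unfavourable response the probabilities stay unchanged

    print(reward_inaction_update([0.25, 0.25, 0.25, 0.25], chosen=2, reward=1))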

Decayed Markov Chain Monte Carlo for Interactive POMDPs

To act optimally in a partially observable, stochastic and multi-agent environment, an autonomous agent needs to maintain a belief about the world at any given time. An extension of partially observable Markov decision processes (POMDPs), called interactive POMDPs (I-POMDPs), provides a principled framework for planning and acting in such settings. I-POMDPs augment POMDP beliefs by including m...
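
As a rough illustration of Monte Carlo belief maintenance, the sketch below runs a plain bootstrap particle filter on a made-up two-state problem. It is not the decayed-MCMC sampler of the cited paper, and the transition and observation numbers are invented.

    # Plain bootstrap particle filter on a toy 2-state problem; shown only to
    # illustrate Monte Carlo belief maintenance, not the decayed-MCMC method
    # of the cited paper. Transition/observation numbers are made up.
    import random

    T = {0: [0.8, 0.2], 1: [0.3, 0.7]}   # P(next_state | state)
    O = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # P(observation | state)

    def particle_belief_update(particles, obs):
        # Propagate each particle through the (hypothetical) transition model.
        moved = [random.choices([0, 1], weights=T[s])[0] for s in particles]
        # Weight by observation likelihood and resample.
        weights = [O[s][obs] for s in moved]
        return random.choices(moved, weights=weights, k=len(particles))

    particles = [random.choice([0, 1]) for _ in range(1000)]
    particles = particle_belief_update(particles, obs=1)
    print("P(state=1) ~", particles.count(1) / len(particles))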

Simulating Sequential Decision-Making Process of Base-Agent Actions in a Multi Agent-Based Economic Landscape (MABEL) Model

In this paper, we present the use of sequential decision-making process simulations for base agents in our multi-agent based economic landscape (MABEL) model. The sequential decision-making process described here is a data-driven Markov Decision Problem (MDP) integrated with stochastic properties. Utility acquisition attributes in our model are generated for each time step of the simulation. We...
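
For reference, the Bellman backup that any such MDP-based simulation step rests on can be sketched as generic value iteration on a tiny, invented model. The states, rewards and transitions below are placeholders, not MABEL's.

    # Generic value iteration on a tiny, made-up MDP (not MABEL's model).
    # Bellman backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
    STATES, ACTIONS, GAMMA = [0, 1], [0, 1], 0.95
    P = {(0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],   # P(s' | s, a)
         (1, 0): [0.2, 0.8], (1, 1): [0.6, 0.4]}
    R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 0.5}

    V = {s: 0.0 for s in STATES}
    for _ in range(200):
        V = {s: max(R[s, a] + GAMMA * sum(p * V[s2] for s2, p in zip(STATES, P[s, a]))
                    for a in ACTIONS)
             for s in STATES}
    print(V)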

Anytime Point Based Approximations for Interactive POMDPs

Partially observable Markov decision processes (POMDPs) have been widely accepted as a rich framework for planning and control problems. In settings where multiple agents interact, POMDPs prove to be inadequate. The interactive partially observable Markov decision process (I-POMDP) is a new paradigm that extends POMDPs to multiagent settings. The added complexity of this model due to the modeli...
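
One simple flavour of point-based approximation restricts value iteration to a fixed, finite set of belief points and maps successor beliefs to their nearest point in that set, which keeps the computation bounded. The toy sketch below does this for an ordinary two-state POMDP with invented parameters; it is not the anytime I-POMDP algorithm of the cited paper.

    # Crude point-based approximation for an ordinary POMDP (not the anytime
    # I-POMDP algorithm of the cited paper): value iteration restricted to a
    # fixed belief grid, with nearest-point lookup for successor beliefs.
    import numpy as np

    T = np.array([[[0.9, 0.1], [0.5, 0.5]],     # T[a, s, s'] = P(s' | s, a)
                  [[0.2, 0.8], [0.6, 0.4]]])
    Z = np.array([[0.8, 0.2], [0.3, 0.7]])       # Z[s', o] = P(o | s')
    R = np.array([[1.0, 0.0], [0.0, 2.0]])       # R[s, a]
    GAMMA = 0.9

    B = [np.array([p, 1 - p]) for p in np.linspace(0, 1, 11)]  # fixed belief grid

    def nearest(b):
        return min(range(len(B)), key=lambda i: np.abs(B[i] - b).sum())

    V = np.zeros(len(B))
    for _ in range(100):
        newV = np.empty(len(B))
        for i, b in enumerate(B):
            q = []
            for a in range(2):
                value = b @ R[:, a]              # expected immediate reward
                pred = b @ T[a]                  # P(s') after action a
                for o in range(2):
                    p_o = pred @ Z[:, o]         # P(o | b, a)
                    if p_o > 1e-12:
                        b_next = pred * Z[:, o] / p_o   # Bayes update of belief
                        value += GAMMA * p_o * V[nearest(b_next)]
                q.append(value)
            newV[i] = max(q)
        V = newV
    print(dict(zip([round(b[0], 1) for b in B], V.round(2))))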

A Framework for Sequential Planning in Multi-Agent Settings

This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian update to maintain their beliefs over time. The solutions map belief states to actions. Models o...
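
The Bayesian update mentioned here has the standard POMDP form b'(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b(s); in an I-POMDP the state would additionally carry a model of the other agent. Below is a minimal sketch of that update over made-up, task-management-flavoured labels; the transition and observation tables are hypothetical.

    # Minimal Bayes update of a discrete belief b(s) after action a and
    # observation o:  b'(s') ∝ P(o | s', a) * sum_s P(s' | s, a) * b(s).
    # The labels and probabilities are made up for illustration only.
    def belief_update(b, a, o, trans, obs):
        """b: {s: prob}; trans[(s, a)]: {s2: prob}; obs[(s2, a)]: {o: prob}."""
        unnorm = {}
        for s2 in {s2 for probs in trans.values() for s2 in probs}:
            pred = sum(b[s] * trans[(s, a)].get(s2, 0.0) for s in b)
            unnorm[s2] = obs[(s2, a)].get(o, 0.0) * pred
        total = sum(unnorm.values())
        return {s2: p / total for s2, p in unnorm.items()}

    trans = {("idle", "assign"): {"busy": 0.9, "idle": 0.1},
             ("busy", "assign"): {"busy": 0.7, "idle": 0.3}}
    obs = {("busy", "assign"): {"report": 0.8, "silence": 0.2},
           ("idle", "assign"): {"report": 0.1, "silence": 0.9}}
    print(belief_update({"idle": 0.5, "busy": 0.5}, "assign", "report", trans, obs))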

Journal title:

Volume:    Issue:

Pages:

Publication date: 2011